The Generative AI Boom Could Fuel a New International Arms Race

WIRED

Governments around the world are rushing to embrace the algorithms that breathed some semblance of intelligence into ChatGPT, apparently enthralled by the enormous economic payoff expected from the technology. Two new reports out this week show that nation-states are also likely rushing to adapt the same technology into weapons of misinformation, in what could become a troubling AI arms race between great powers. Researchers at RAND, a nonprofit think tank that advises the United States government, point to a Chinese military researcher with experience in information campaigns who has publicly discussed how generative AI could aid such work. One research article, from January 2023, suggests using large language models such as a fine-tuned version of Google's BERT, a precursor to the more powerful language models that power chatbots like ChatGPT. "There's no evidence of it being done right now," says William Marcellino, an AI expert and senior behavioral and social scientist at RAND, who contributed to the report. He and others at RAND are alarmed at the prospect of influence campaigns gaining new scale and power thanks to generative AI. "Coming up with a system to create millions of fake accounts that purport to be Taiwanese, or Americans, or Germans, that are pushing a state narrative--I think that it's qualitatively and quantitatively different," Marcellino says.


It Costs Just $400 to Build an AI Disinformation Machine

WIRED

In May, Sputnik International, a state-owned Russian media outlet, posted a series of tweets lambasting US foreign policy and attacking the Biden administration. Each prompted a curt but well-crafted rebuttal from an account called CounterCloud, sometimes including a link to a relevant news or opinion article. It generated similar responses to tweets by the Russian embassy and Chinese news outlets criticizing the US. Russian criticism of the US is far from unusual, but CounterCloud's material pushing back was: The tweets, the articles, and even the journalists and news sites were crafted entirely by artificial intelligence algorithms, according to the person behind the project, who goes by the name Nea Paw and says it is designed to highlight the danger of mass-produced AI disinformation. Paw did not post the CounterCloud tweets and articles publicly but provided them to WIRED and also produced a video outlining the project.


Empirically grounded agent-based policy evaluation of the adoption of sustainable lighting under the European Ecodesign Directive

Schoenmacker, Gido H., Jager, Wander, Verbrugge, Rineke

arXiv.org Artificial Intelligence

Twelve years ago, the European Union began the gradual phase-out of energy-inefficient incandescent light bulbs under the Ecodesign Directive. In this work, we implement an agent-based simulation of consumer behaviour in the EU lighting market, with the goal of explaining that behaviour and exploring alternative policies. Agents are based on the Consumat II model, have individual preferences grounded in empirical market research, gather experience from past actions, and interact socially with each other in a dynamic environment. Our findings suggest that the adoption of energy-friendly lighting alternatives was hindered by a low level of consumer interest combined with sufficiently high satisfaction with incandescent bulbs, and that information campaigns can partially address this. These findings offer insight into both individual-level driving forces of behaviour and society-level outcomes in a niche market. With this, our work demonstrates the strengths of agent-based models for policy generation and evaluation.
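The mechanism the abstract describes -- agents who simply repeat past behaviour while satisfied and uninterested, and only imitate or reconsider otherwise -- can be illustrated with a heavily simplified sketch. All names, thresholds, and the random-neighbour structure below are invented for illustration; this is not the paper's actual Consumat II implementation:

```python
import random

class Consumer:
    """A minimal Consumat-style agent choosing between bulb types."""

    def __init__(self):
        self.choice = "incandescent"
        self.satisfaction = random.uniform(0.7, 1.0)  # high satisfaction with the old bulb
        self.interest = random.uniform(0.0, 0.3)      # low interest in alternatives

    def step(self, neighbors, campaign_boost=0.0):
        interest = self.interest + campaign_boost
        # Satisfied, uninterested agents repeat their past behaviour.
        if self.satisfaction > 0.6 and interest < 0.5:
            return
        # Otherwise the agent imitates the majority or deliberately switches.
        led_share = sum(n.choice == "led" for n in neighbors) / max(len(neighbors), 1)
        if led_share > 0.5 or interest > 0.7:
            self.choice = "led"

def simulate(n=200, steps=50, campaign_boost=0.0):
    """Return the final share of LED adopters in a population of n agents."""
    agents = [Consumer() for _ in range(n)]
    for a in agents[:10]:           # seed a handful of early adopters
        a.choice = "led"
    for _ in range(steps):
        for a in agents:
            neighbors = random.sample(agents, 5)
            a.step(neighbors, campaign_boost)
    return sum(a.choice == "led" for a in agents) / n
```

Raising `campaign_boost` stands in for an information campaign that increases consumer interest: with the boost at zero, satisfied agents never re-evaluate and adoption stalls near the seeded level, mirroring the low-interest dynamic the abstract reports.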


Ferrara

AAAI Conferences

Information spreading on social media contributes to the formation of collective opinions. Millions of social media users are exposed every day to popular memes -- some generated organically by grassroots activity, others sustained by advertising, information campaigns or more or less transparent coordinated efforts. While most information campaigns are benign, some may have nefarious purposes, including terrorist propaganda, political astroturf, and financial market manipulation. This poses a crucial technological challenge with deep social implications: can we detect whether the spreading of a viral meme is being sustained by a promotional campaign? Here we study trending memes that attract attention either organically, or by means of advertisement. We designed a machine learning framework capable of detecting promoted campaigns and separating them from organic ones in their early stages. Using a dataset of millions of posts associated with trending Twitter hashtags, we show that remarkably accurate early detection is possible, achieving an AUC of 95%. Feature selection analysis reveals that network diffusion patterns and content cues are powerful early-detection signals.
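The pipeline the abstract describes -- extract diffusion and content features from a trending meme early in its lifetime, then score it with a supervised classifier -- can be sketched as follows. The three feature names and the synthetic training data are invented for illustration; the actual framework uses a far larger set of network, content, and timing features extracted from real Twitter data:

```python
import math
import random

random.seed(7)

def extract_features(meme):
    # Three illustrative early signals: promoted campaigns often show
    # shallower cascades, more copy-pasted wording, and younger accounts.
    return [meme["cascade_depth"],
            meme["unique_text_ratio"],
            meme["mean_account_age"]]

def train_logistic(X, y, lr=0.1, epochs=200):
    """Plain stochastic-gradient logistic regression; bias stored in w[0]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            z = max(min(z, 30.0), -30.0)          # clamp to avoid exp overflow
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, x):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
    z = max(min(z, 30.0), -30.0)
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic labeled memes standing in for real trending hashtags.
organic = [{"cascade_depth": random.uniform(5, 10),
            "unique_text_ratio": random.uniform(0.7, 1.0),
            "mean_account_age": random.uniform(2, 5)} for _ in range(50)]
promoted = [{"cascade_depth": random.uniform(1, 3),
             "unique_text_ratio": random.uniform(0.1, 0.4),
             "mean_account_age": random.uniform(0.1, 1.0)} for _ in range(50)]
X = [extract_features(m) for m in organic + promoted]
y = [0] * len(organic) + [1] * len(promoted)

w = train_logistic(X, y)
accuracy = sum((predict(w, x) > 0.5) == bool(yi) for x, yi in zip(X, y)) / len(y)
```

On cleanly separable toy data like this, even a bare-bones logistic model scores nearly perfectly; the hard part of the real system is the feature engineering over noisy early-stage diffusion data, not the classifier itself.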


Joint Chiefs' Information Officer: U.S. Is Behind on Information Warfare. AI Can Help

#artificialintelligence

The United States needs a better strategy and more advanced tools for information operations, Lt. Gen. Dennis Crall, the Joint Staff's chief information officer, said Thursday. The government has become slower and less confident in its approach, a reticence it can't afford as artificial intelligence drastically increases the pace of messaging and information campaigns, said Crall, who is also the Joint Staff's director for command, control, communications, computers, and cyber. "The speed at which machines and AI won some of these information campaigns changes the game drastically for us. If we study, if we're hesitant, if we don't have good left and right lateral limits, if every operation requires a new set of permissions...we're never going to compete." Crall made his remarks at the NDIA conference for Special Operations and Low Intensity Conflict, or SOLIC.


The AI Company Helping the Pentagon Assess Disinfo Campaigns

WIRED

In September, Azerbaijan and Armenia renewed fighting over Nagorno-Karabakh, a disputed territory in the Caucasus mountains. By then, an information warfare campaign over the region had been underway for several months. The campaign was identified using artificial intelligence technology being developed for US Special Operations Command (SOCOM), which oversees US special forces operations. The AI system, from Primer, a company focused on the intelligence industry, identified key themes in the information campaign by analyzing thousands of public news sources. In practice, Primer's system can analyze classified information too.


Language-Generating A.I. Is a Free Speech Nightmare

Slate

What in the name of Paypal and/or Palantir did you just say about me, you filthy degenerate? I'll have you know I'm the Crown Prince of Silicon Valley, and I've been involved in numerous successful tech startups, and I have over $1B in liquid funds. I've used that money to promote heterodox positions on human enhancement, control political arenas, and am experimenting with mind uploading. I'm also trained in classical philosophy and was recently ranked the most influential libertarian in the world by Google. You are nothing to me but just another alternative future. I will wipe you out with a precision of simulation the likes of which has never been seen before, mark my words.